Parameter calibration with stochastic gradient descent for interacting particle systems driven by neural networks

Authors

Abstract

We propose a neural network approach to model general interaction dynamics and an adjoint-based stochastic gradient descent algorithm to calibrate its parameters. The parameter calibration problem is considered as an optimal control problem that is investigated from a theoretical and numerical point of view. We prove the existence of optimal controls, derive the corresponding first-order optimality system, and formulate a stochastic gradient descent algorithm to identify the parameters for given data sets. To validate the approach, we use real data sets from traffic and crowd motion to fit the parameters. The results are compared with forces from well-known models such as the Lighthill–Whitham–Richards model for traffic and the social force model for crowd motion.
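As a rough illustration of the idea in the abstract (not the authors' implementation), the following Python sketch calibrates a small neural-network interaction force by gradient descent on a trajectory-matching loss. The adjoint-based gradient of the paper is replaced here by a crude finite-difference estimate, and the network architecture, synthetic data, and all names are hypothetical:

import numpy as np

rng = np.random.default_rng(0)
H = 4                                      # hidden width of the force network

def unpack(theta):
    return theta[:H], theta[H:2 * H], theta[2 * H:]

def nn_force(theta, r):
    # One-hidden-layer network mapping a pairwise difference r to a scalar force.
    W1, b1, W2 = unpack(theta)
    return float(W2 @ np.tanh(W1 * r + b1))

def simulate(theta, x0, dt, steps):
    # Explicit Euler discretisation of x_i' = (1/N) sum_j f_theta(x_j - x_i).
    x, traj, N = x0.copy(), [x0.copy()], len(x0)
    for _ in range(steps):
        dx = np.array([np.mean([nn_force(theta, x[j] - x[i]) for j in range(N)])
                       for i in range(N)])
        x = x + dt * dx
        traj.append(x.copy())
    return np.array(traj)

def loss(theta, data, dt):
    # Squared deviation between simulated and observed trajectories.
    return np.mean((simulate(theta, data[0], dt, len(data) - 1) - data) ** 2)

def grad(theta, data, dt, eps=1e-4):
    # Central finite differences as a stand-in for the adjoint-based gradient.
    g = np.zeros_like(theta)
    for k in range(theta.size):
        e = np.zeros_like(theta); e[k] = eps
        g[k] = (loss(theta + e, data, dt) - loss(theta - e, data, dt)) / (2 * eps)
    return g

# Synthetic "observations" generated by the attractive reference force f(r) = r.
N, steps, dt = 5, 10, 0.05
x = rng.normal(size=N)
data = [x.copy()]
for _ in range(steps):
    x = x + dt * (np.mean(x) - x)
    data.append(x.copy())
data = np.array(data)

theta = 0.1 * rng.normal(size=3 * H)
for _ in range(60):                        # plain (full-batch) descent loop;
    theta -= 0.2 * grad(theta, data, dt)   # subsampling the data would make it stochastic
print("calibrated loss:", loss(theta, data, dt))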



Similar resources

Stochastic Particle Gradient Descent for Infinite Ensembles

The superior performance of ensemble methods with infinite models is well known. Most of these methods are based on optimization problems in infinite-dimensional spaces with some regularization; for instance, boosting methods and convex neural networks use L1-regularization with a non-negativity constraint. However, due to the difficulty of handling L1-regularization, these problems require ea...
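As a generic reference point (this is not the stochastic particle gradient descent method the cited paper proposes), the L1 penalty combined with a non-negativity constraint is often handled with a proximal step, which reduces to a shifted clipping of the weights:

import numpy as np

def prox_nonneg_l1(w, step, lam):
    # argmin_u ||u - w||^2 / (2*step) + lam * ||u||_1  subject to  u >= 0.
    return np.maximum(w - step * lam, 0.0)

w = np.array([0.8, -0.3, 0.05, 1.2])
print(prox_nonneg_l1(w, step=0.1, lam=1.0))   # -> [0.7, 0.0, 0.0, 1.1]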


Natural Gradient Descent for Training Stochastic Complex-Valued Neural Networks

In this paper, the natural gradient descent method for multilayer stochastic complex-valued neural networks is considered, and the natural gradient is given for a single stochastic complex-valued neuron as an example. Since the space of the learnable parameters of stochastic complex-valued neural networks is not a Euclidean space but a curved manifold, the complex-valued natural gradient ...
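For orientation, the standard real-valued natural gradient step on such a curved parameter manifold has the form below, with G the Fisher information metric; the cited paper derives the complex-valued analogue, which is not reproduced here.

\theta_{t+1} = \theta_t - \eta \, G(\theta_t)^{-1} \nabla_\theta L(\theta_t),
\qquad
G(\theta) = \mathbb{E}\!\left[\nabla_\theta \log p(x;\theta)\, \nabla_\theta \log p(x;\theta)^{\top}\right].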


"Oddball SGD": Novelty Driven Stochastic Gradient Descent for Training Deep Neural Networks

Stochastic Gradient Descent (SGD) is arguably the most popular of the machine learning methods applied to training deep neural networks (DNN) today. It has recently been demonstrated that SGD can be statistically biased so that certain elements of the training set are learned more rapidly than others. In this article, we place SGD into a feedback loop whereby the probability of selection is pro...
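A minimal sketch of such a novelty-driven selection loop follows, assuming (as the truncated abstract suggests) that the selection probability is proportional to a per-example novelty score, taken here to be the current per-example loss; the function names and the temperature parameter are hypothetical:

import numpy as np

rng = np.random.default_rng(0)

def select_index(per_example_loss, temperature=1.0):
    # Draw a training example with probability proportional to its novelty,
    # so poorly learned examples are revisited more often.
    scores = np.maximum(per_example_loss, 1e-12) ** temperature
    p = scores / scores.sum()
    return rng.choice(len(per_example_loss), p=p)

losses = np.array([0.01, 0.50, 0.02, 1.30])   # current per-example errors
print(select_index(losses))                    # index 3 is drawn most often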


Handwritten Character Recognition using Modified Gradient Descent Technique of Neural Networks and Representation of Conjugate Descent for Training Patterns

The purpose of this study is to analyze the performance of the backpropagation algorithm with changing training patterns and the second momentum term in feedforward neural networks. This analysis is conducted on 250 different words of three small letters from the English alphabet. These words are presented to two vertical segmentation programs which are designed in MATLAB and based on portions (1...
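For reference, the standard momentum-based weight update that such modifications build on is sketched below; this is a generic formulation and not the exact update studied in the cited work:

import numpy as np

def momentum_step(w, grad, velocity, lr=0.1, mu=0.9):
    # Accumulate a running descent direction and apply it to the weights.
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

w, v = np.zeros(3), np.zeros(3)
g = np.array([0.2, -0.1, 0.05])
w, v = momentum_step(w, g, v)
print(w)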


Gradient Descent for Spiking Neural Networks

Many studies on neural computation are based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking networks. Here,...



Journal

Journal title: Mathematics of Control, Signals, and Systems

Year: 2021

ISSN: 0932-4194, 1435-568X

DOI: https://doi.org/10.1007/s00498-021-00309-8